17 research outputs found

    Haptic Guidance for Teleoperation: Optimizing Performance and User Experience

    Get PDF
    Haptic guidance in teleoperation (e.g., of robotic systems) is a pioneering approach to combining automation with human competencies. In the current user study, various forms of haptic guidance were evaluated in terms of user performance and experience. Twenty-six participants completed an obstacle-avoidance task and a peg-in-hole task in a virtual environment using a seven-DoF force-feedback device. Three types of haptic guidance (translational, rotational, and their combination, i.e., 6 DoF) and three levels of guidance forces and torques (stiffnesses) were compared. Moreover, a secondary-task paradigm was used to explore the effects of additional cognitive load. The results show that haptic guidance significantly improves performance (i.e., completion times and collision forces). The best results were obtained when the guidance forces were set to a medium or high value. Additionally, feelings of control were significantly higher under increased cognitive load when participants were supported by translational haptic guidance.
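    The guidance types compared above can be pictured as spring-like virtual fixtures: a stiffness pulls the tool pose toward a reference, in translation, rotation, or both. A minimal sketch, with hypothetical function names and stiffness values (not taken from the study):

```python
def guidance_wrench(pos, ref_pos, rot_err, k_trans, k_rot):
    """Spring-like guidance wrench pulling the tool toward a reference pose.

    pos, ref_pos: current and reference tool position (3-vectors)
    rot_err:      orientation error as axis * angle [rad] (3-vector)
    k_trans, k_rot: guidance stiffnesses; setting one to 0 disables that type
    """
    force = [k_trans * (r - p) for p, r in zip(pos, ref_pos)]   # translational guidance
    torque = [-k_rot * e for e in rot_err]                      # rotational guidance
    return force + torque                                       # 6-DoF wrench

# The three guidance types expressed as stiffness settings (values illustrative):
translational_only = guidance_wrench([0.1, 0, 0], [0, 0, 0], [0, 0, 0.2],
                                     k_trans=200.0, k_rot=0.0)
combined_6dof      = guidance_wrench([0.1, 0, 0], [0, 0, 0], [0, 0, 0.2],
                                     k_trans=200.0, k_rot=5.0)
```

    Varying `k_trans` and `k_rot` corresponds to the low/medium/high stiffness conditions compared in the study.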

    Light-field head-mounted displays reduce the visual effort: A user study

    Get PDF
    Head-mounted displays (HMDs) allow the visualization of virtual content and a change of viewing perspective in virtual reality (VR). Besides entertainment purposes, such displays also find application in augmented reality, VR training, and telerobotic systems. The quality of visual feedback plays a key role in interaction performance in such setups. In recent years, high-end computers and displays have reduced simulator sickness with respect to nausea symptoms, while new visualization technologies are required to further reduce oculomotor and disorientation symptoms. The so-called vergence-accommodation conflict (VAC) in standard stereoscopic displays has so far prevented intensive use of 3D displays. The VAC describes the visual mismatch between the projected stereoscopic 3D image and the optical distance to the HMD screen. This conflict can be resolved by using displays with a correct focal distance. The light-field HMD of this study provides close-to-continuous depth and high image resolution, enabling highly natural visualization. This paper presents the first user study on the visual comfort of light-field displays with a close-to-market HMD, based on complex interaction tasks. The results provide first evidence that light-field technology brings clear benefits to the user in terms of physical use comfort, workload, and depth-matching performance.
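    The VAC described above is commonly quantified as the difference, in diopters, between the vergence distance (to the rendered object) and the fixed accommodation distance (to the screen's focal plane). A small illustrative calculation; the 2 m focal plane is an assumed value, not a figure from the paper:

```python
def vac_mismatch_diopters(object_dist_m, screen_focal_dist_m):
    """Vergence follows the rendered object; accommodation is locked to the
    HMD's focal plane. Their difference in diopters (1/m) is a common proxy
    for VAC severity; a light-field display drives this toward zero."""
    return abs(1.0 / object_dist_m - 1.0 / screen_focal_dist_m)

near_object = vac_mismatch_diopters(0.4, 2.0)  # close interaction: strong conflict
at_focus    = vac_mismatch_diopters(2.0, 2.0)  # object at the focal plane: none
```

    Close interaction tasks, as in telerobotics, push objects well inside the focal plane, which is why the conflict matters most there.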

    Audio Perception in Robotic Assistance for Human Space Exploration: A Feasibility Study

    Get PDF
    Future crewed missions beyond low Earth orbit will rely greatly on the support of robotic assistance platforms to perform inspection and manipulation of critical assets, including crew habitats, landing sites, and assets for life support and operation. Maintenance and manipulation of a crewed site in extraterrestrial environments is a complex task, and the system will face different challenges during operation. While most may be solved autonomously, on certain occasions human intervention will be required. The telerobotic demonstration mission Surface Avatar, led by the German Aerospace Center (DLR) with partner European Space Agency (ESA), investigates different approaches offering astronauts on board the International Space Station (ISS) control of ground robots in representative scenarios, e.g., a Martian landing and exploration site. In this work, we present a feasibility study on how to integrate auditory information into the mentioned application. We discuss methods for obtaining audio information and localizing audio sources in the environment, as well as for fusing auditory and visual information to perform state estimation based on the gathered data. We demonstrate our work in different experiments to show the effectiveness of utilizing audio information, present the results of a spectral analysis of our mission assets, and show how this information could help future astronauts reason about the current mission situation.
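    To give a flavor of the audio-localization step mentioned above, a two-microphone array can estimate a source direction from the time difference of arrival (TDOA) found by cross-correlation. A dependency-free sketch, not the authors' implementation; all names are hypothetical:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def direction_from_tdoa(tdoa_s, mic_spacing_m):
    """Angle of arrival (rad) from the broadside axis of a two-mic array."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa_s / mic_spacing_m))
    return math.asin(s)

def tdoa_by_cross_correlation(x, y, fs):
    """TDOA (s) of y relative to x: positive when y is a delayed copy of x.
    Brute-force peak search over all lags of the cross-correlation."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        v = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag / fs
```

    Real systems typically use a generalized cross-correlation (e.g., GCC-PHAT) in the frequency domain for robustness against reverberation, but the geometry is the same.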

    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Get PDF
    Crewed missions to celestial bodies such as the Moon and Mars are a focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location to bring the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work, and ultimately ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised-autonomy robot command can be used to execute inspection, maintenance, and installation tasks with a robotic co-worker on a planetary surface. The knowledge-driven approach utilized in these experiments reached its limits only when situations arose that were not anticipated by the mission design. In deep-space scenarios, the astronauts must be able to overcome such limitations. An approach towards more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment, in which an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge-driven approach. The knowledge management is based on organizing the robot's prior knowledge in an object-centered context. Action Templates are used to define the knowledge on handling objects at a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker.
    By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. To use advanced teleoperation modes such as haptic telepresence efficiently, a variety of input devices are integrated into the proposed system. This work shows how the integration of these devices is realized in a way that is agnostic to both input devices and operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment. We show how the system is integrated into a Robot Command Terminal featuring a 3-degree-of-freedom joystick and a 7-degree-of-freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable-autonomy system for use with a heterogeneous robotic team.
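    The object-centered idea above can be sketched in a few lines: objects carry Action Templates, and since a robot is itself an object, robot-specific (tele)operation modes enter the knowledge base the same way as any other handling knowledge. All class and instance names below are hypothetical illustrations, not the actual DLR implementation:

```python
class ActionTemplate:
    """Knowledge on handling one object type; symbolic/geometric levels stubbed out."""
    def __init__(self, name, applies_to, execute):
        self.name = name
        self.applies_to = applies_to   # object type the AT is defined on
        self.execute = execute         # handling logic, stubbed here

class WorldObject:
    def __init__(self, name, obj_type):
        self.name, self.obj_type = name, obj_type

class KnowledgeBase:
    def __init__(self):
        self.templates = []

    def add_template(self, at):
        self.templates.append(at)

    def commands_for(self, obj):
        """All ATs applicable to an object -- the command set offered to the operator."""
        return [at.name for at in self.templates if at.applies_to == obj.obj_type]

kb = KnowledgeBase()
kb.add_template(ActionTemplate("inspect", "panel", lambda: None))
# Robot as object: a teleoperation mode is injected as just another AT.
kb.add_template(ActionTemplate("haptic_telepresence", "robot", lambda: None))

justin = WorldObject("rollin_justin", "robot")
panel = WorldObject("solar_panel_3", "panel")
```

    Selecting `justin` then exposes `haptic_telepresence` as a command, exactly as selecting `panel` exposes `inspect`.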

    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Get PDF
    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for the setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To accomplish a high variety of tasks efficiently, we envision the use of a heterogeneous team of robots that can be commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ object-centered prior-knowledge management that stores information on how to deal with the objects around the robot. This includes knowledge on detecting, reasoning about, and interacting with the objects. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e., reasoning at the symbolic and geometric levels to verify feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skill sets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR across multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot and shared among all participants. This enables continuous robot- and command-specific decisions on which information to use to accomplish a task. A Mission Control instance allows tuning of the available command possibilities to account for specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, while the MRWSR serves as a robot-agnostic interface to the planetary assets.
    The selection of a robot to be commanded serves as the top-level filter for the available commands. A second filter layer is applied by selecting an object instance. These filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator. Robot-specific direct-teleoperation skills are accessible via their respective ATs and can be mapped dynamically to available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g., velocity control, model-mediated control, or domain-based passivity control, depending on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments.
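    The per-robot storage and decentralized synchronization of the MRWSR might look like the following sketch. Names and the merge policy are hypothetical; last-write-wins is a deliberate simplification of the synchronization described above:

```python
class MultiRobotWorldState:
    """Each robot keeps its own observation of an object's properties; copies
    are synchronized, and the commanding side picks which estimate to trust."""

    def __init__(self):
        self._state = {}  # {object_id: {robot_id: properties}}

    def update(self, robot_id, object_id, properties):
        self._state.setdefault(object_id, {})[robot_id] = dict(properties)

    def merge_from(self, other):
        """Decentralized sync: fold in another copy (last write wins here)."""
        for obj, per_robot in other._state.items():
            for robot, props in per_robot.items():
                self.update(robot, obj, props)

    def estimate(self, object_id, preferred_robot=None):
        """Robot- and command-specific choice of which perception to use."""
        per_robot = self._state.get(object_id, {})
        if preferred_robot in per_robot:
            return per_robot[preferred_robot]
        return next(iter(per_robot.values()), None)

a, b = MultiRobotWorldState(), MultiRobotWorldState()
a.update("rover", "sample_box", {"pose": (1.0, 2.0, 0.0)})
b.update("justin", "sample_box", {"pose": (1.1, 2.0, 0.0)})
a.merge_from(b)  # both robots' observations now coexist in a
```

    Because observations are stored per robot rather than fused destructively, a manipulation command can still prefer the manipulating robot's own estimate.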

    Model-Augmented Haptic Telemanipulation: Concept, Retrospective Overview, and Current Use Cases

    Get PDF
    Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high requirements on reliability, precision, and task duration. In order to close this gap, we research model-augmented haptic telemanipulation (MATM), which uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that generates assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been demonstrated successfully in completed telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced the specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.
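    The local-model side of such an approach can be illustrated with the classic model-mediated idea in one dimension: the operator feels a force computed immediately from a local estimate of the remote environment, rather than waiting a round trip for measured contact forces. A toy sketch; stiffness and positions are hypothetical, and this is a simplification of MATM, not its implementation:

```python
def local_haptic_feedback(master_pos, wall_pos_estimate, k_model=500.0):
    """Penalty force from a local model of a remote wall: reacts instantly,
    independent of the communication delay to the remote site."""
    penetration = master_pos - wall_pos_estimate
    return -k_model * penetration if penetration > 0.0 else 0.0

# Free space: no force; touching the modeled wall: immediate stiff response.
free    = local_haptic_feedback(master_pos=0.05, wall_pos_estimate=0.10)
contact = local_haptic_feedback(master_pos=0.12, wall_pos_estimate=0.10)
```

    The remote side's job is then to keep `wall_pos_estimate` up to date from sensing, which is where the delayed channel enters without destabilizing the haptic loop.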

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

    Get PDF
    Robotics is vital to continued progress toward Lunar and Martian exploration, in-situ resource utilization, and surface-infrastructure construction. Large-scale extraterrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy gained in our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is at the command of a crew member on the ISS. The team comprises a humanoid robot for dexterous object handling, construction, and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrain; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: acting as an avatar of the crew in haptically coupled telepresence, or receiving task-level commands like an intelligent co-worker. Through the crew performing collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward for the planned experiment sessions.

    Implementation of a Real-Time-Capable Simulation of a Highly Elastic Aircraft Using Simplified Modeling Methods

    No full text
    This thesis deals with a real-time-capable simulation of a highly elastic wing, or of an aircraft with highly elastic wings. For non-rigid wings, there is strong coupling between the structural deformation and the aerodynamic forces. The wing is therefore divided into individual rigid (lumped-mass) elements, and spring-damper combinations reproduce its structural properties. The aerodynamic computation of the wing is performed with a nonlinear lifting-line method. The mechanical simulation uses the C++ library Simbody to compute the equations of motion. With a focus on real-time capability, performance optimizations of the simulation algorithm were carried out and evaluated. In addition, the simulation was verified at the structural and aerodynamic levels and compared with an experiment. The algorithm was designed so that several parameters, such as the inflow, external forces, and material parameters, can be manipulated during the simulation. This work serves as a foundation for a complete aircraft simulation and is intended for assessing and testing control algorithms, design parameters, and flight behavior.
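    The lumped-mass discretization described above can be illustrated with a single bending degree of freedom: rigid segments coupled by a spring-damper, driven by an aerodynamic moment and integrated with a simple fixed-step scheme. This is a toy stand-in for the Simbody multibody integration used in the thesis; all parameter values are hypothetical:

```python
def segment_joint_moment(delta_angle, delta_rate, k_spring, c_damper):
    """Restoring moment of one spring-damper joint between two rigid wing elements."""
    return -k_spring * delta_angle - c_damper * delta_rate

def step_bending_dof(theta, omega, aero_moment, inertia, k, c, dt):
    """One semi-implicit Euler step of a single bending degree of freedom."""
    moment = segment_joint_moment(theta, omega, k, c) + aero_moment
    omega += moment / inertia * dt   # update rate first (symplectic/stable)
    theta += omega * dt
    return theta, omega

# A constant aerodynamic moment bends the segment toward its static equilibrium
# (theta -> aero_moment / k = 0.04 rad), oscillating and decaying on the way.
theta, omega = 0.0, 0.0
for _ in range(1000):  # 1 s of simulated time at dt = 1 ms
    theta, omega = step_bending_dof(theta, omega, aero_moment=2.0,
                                    inertia=0.5, k=50.0, c=1.0, dt=0.001)
```

    In the thesis, the aerodynamic moment would instead be recomputed every step by the nonlinear lifting-line method, closing the aeroelastic coupling loop between deformation and airloads.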

    Comprehensive genomic analysis identifies MDM2 and AURKA as novel amplified genes in juvenile angiofibromas

    No full text
    Schick, B.; Wemmert, S.; Bechtel, U.; Nicolai, Piero; Hofmann, T.; Golabek, W.; Urbschat, S.